Scheduling of MPI Applications: Self-co-scheduling

Authors

  • Gladys Utrera
  • Julita Corbalán
  • Jesús Labarta
Abstract

Scheduling parallel jobs has long been an active research area. The scheduler has to deal with heterogeneous workloads and deliver throughput and response times that ensure good performance. We propose a Dynamic Space-Sharing Scheduling technique, Self Co-Scheduling, which combines the benefits of Static Space Sharing and Co-Scheduling. A job is allocated a processor partition in which its number of processes may exceed the number of processors. Because MPI jobs are not malleable, we make the job contend with itself for the processors by applying Co-Scheduling. The goal of this paper is to evaluate and compare the impact of contending for resources among jobs versus within the job itself. We demonstrate that our Self Co-Scheduling technique has better performance and stability than other Time Sharing Scheduling techniques, especially when working with high-communication-degree workloads, heavily loaded machines and high multiprogramming levels.
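The core idea of the abstract — a job whose process count exceeds its processor partition time-shares that partition among its own processes, rather than contending with other jobs — can be illustrated with a minimal round-robin sketch. This is an assumed illustration, not the authors' implementation; the function name and quantum model are hypothetical.

```python
from collections import deque

def self_co_schedule(num_processes, num_processors, quanta):
    """Sketch of self-co-scheduling: round-robin a job's own
    processes over its dedicated processor partition.

    Returns, for each time quantum, the list of process ids running.
    """
    ready = deque(range(num_processes))
    schedule = []
    for _ in range(quanta):
        # Fill the partition from the front of the job's own ready queue.
        running = [ready[i] for i in range(min(num_processors, len(ready)))]
        schedule.append(running)
        # Rotate: the processes that just ran move to the back,
        # so the job contends only with itself for the processors.
        for _ in range(len(running)):
            ready.append(ready.popleft())
    return schedule

# Example: 6 MPI processes on a 4-processor partition, 3 quanta.
print(self_co_schedule(6, 4, 3))
# -> [[0, 1, 2, 3], [4, 5, 0, 1], [2, 3, 4, 5]]
```

Note how every process makes progress across quanta without any processor ever running processes from a different job, which is the property the abstract contrasts with inter-job Time Sharing.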


Similar resources

Co-allocation of MPI Jobs with the VIOLA Grid MetaScheduling Framework

The co-allocation of resources for the parallel execution of distributed MPI applications in a Grid environment is a challenging task. On the one hand, it is mandatory to coordinate the usage of computational resources, for example compute clusters; on the other hand, the additional scheduling of network resources improves the overall performance. Most Grid middlewares do not include such me...


Designing Parallel Loop Self-Scheduling Schemes by the Hybrid MPI and OpenMP Model for Grid Systems with Multi-Core Computational Nodes

Loop scheduling on parallel and distributed systems has been thoroughly investigated in the past. However, none of these studies considers the multicore architectures that now dominate the markets for desktop computers, laptop computers, servers, etc. On the other hand, although there have been many studies proposed to employ the hybrid MPI and OpenMP programming model to exploit different lev...


Building MPI for Multi-Programming Systems Using Implicit Information

With the growing importance of fast system area networks in the parallel community, it is becoming common for message passing programs to run in multi-programming environments. Competing sequential and parallel jobs can distort the global coordination of communicating processes. In this paper, we describe our implementation of MPI using implicit information for global coscheduling. Our results ...


MPC: A Unified Parallel Runtime for Clusters of NUMA Machines

Over the last decade, the Message Passing Interface (MPI) has become a very successful parallel programming environment for distributed memory architectures such as clusters. However, the architecture of cluster nodes is currently evolving from small symmetric shared memory multiprocessors towards massively multicore, Non-Uniform Memory Access (NUMA) hardware. Although regular MPI implementations ar...


Data Replication-Based Scheduling in Cloud Computing Environment

High-performance computing and vast storage are two key factors required for executing data-intensive applications. In comparison with traditional distributed systems like the data grid, cloud computing provides these factors in a more affordable, scalable and elastic platform. Furthermore, accessing data files is critical for executing such applications. Sometimes accessing data becomes...



Journal title:

Volume   Issue 

Pages  -

Publication date: 2004